A familiar problem in calculus is to define a recursive sequence, say \(x_1=1\) and \(x_{n+1}=1+1/x_n\) (for \(n\ge 1\)), and find its limit. If we suppose the limit exists, say \(x=\lim x_n\), we can take the limit of the recursion relation to find an equation for the limit: \(x=1+1/x\). If we clear the fraction, we have a quadratic, \(x^2-x-1=0\), with solutions \(x = \tfrac12\left(1\pm\sqrt{5}\,\right)\). Of course, there are two solutions, but the terms of the sequence are all positive, so (assuming the limit does exist) we conclude \(x=\tfrac12\left(1+\sqrt{5}\,\right)\). If we actually compute the terms of the sequence, we see this number really is the limit: \(x_{16}=1.618034\) (which agrees with the true value to the places shown).
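A few lines of Python make this easy to check (a rough sketch; the number of iterations is arbitrary):

```python
# Iterate x_{n+1} = 1 + 1/x_n starting from x_1 = 1 and compare x_16
# with the positive root (1 + sqrt(5))/2 of x^2 - x - 1 = 0.
x = 1.0
for n in range(1, 16):          # produces x_2 through x_16
    x = 1.0 + 1.0 / x
print(f"x_16            = {x:.6f}")                   # 1.618034
print(f"(1 + sqrt(5))/2 = {(1 + 5**0.5) / 2:.6f}")    # 1.618034
```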
Now this is a nice exercise in applying limit theorems (such as \(\lim 1/x_n=1/x\) if \(\lim x_n=x\)), but it depends on assuming the limit exists (it's a more challenging exercise to prove that it does). However, the problem just described was rigged to be particularly simple: the quadratic had two roots, one positive and one negative, so it was easy to decide which root is the correct limit.
It occurred to me to ask: is there a recursion where it is not so easy to decide which of two roots is correct? I quickly found a nice example: \(x_1=\frac32\), \(x_{n+1}=3-2/x_n\). Here, clearing the fraction and taking the limit leads to the quadratic \(x^2 - 3x + 2=0\), which has two positive solutions: \(x=1\) and \(x=2\). Both numbers are therefore candidates for the limit of the sequence; the initial term \(x_1=\frac32\) is halfway between the two possibilities and gives no hint as to what the sequence actually does.
It turns out the limit is \(2\). This becomes apparent if we compute five or ten terms of the sequence, but we can prove it by showing the sequence is increasing and bounded above by \(2\); both claims can be verified by mathematical induction, using the following fact: if \(1\lt x\lt 2\), then \(x\lt 3-2/x\lt 2\). (The left inequality is equivalent to \(x^2-3x+2\lt 0\), that is, \((x-1)(x-2)\lt 0\), which holds exactly when \(1\lt x\lt 2\).)
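Here is a quick sketch of that computation in Python (nothing fancy, just iterating the recursion a few times):

```python
# Iterate x_{n+1} = 3 - 2/x_n starting from x_1 = 3/2; the terms
# increase and approach 2.
x = 1.5
for n in range(1, 11):
    print(f"x_{n} = {x:.6f}")
    x = 3.0 - 2.0 / x
```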
The limit is also \(2\) if we begin with \(x_1 \gt 2\); this can be proved in the same way (the sequence is then decreasing and bounded below by \(2\)). But things are not as simple if \(x_1\lt 1\). What tends to happen is that the first two or three terms bounce around until a term above \(1\) appears; the sequence then converges to \(2\).
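A small experiment shows both behaviors (again a rough Python sketch; the starting values \(\tfrac12\) and \(3\) are chosen arbitrarily):

```python
def orbit(x1, steps=8):
    """Return the first `steps` terms of x_{n+1} = 3 - 2/x_n."""
    terms = [x1]
    for _ in range(steps - 1):
        terms.append(3.0 - 2.0 / terms[-1])
    return terms

print([round(t, 4) for t in orbit(0.5)])   # bounces negative, then heads toward 2
print([round(t, 4) for t in orbit(3.0)])   # decreases steadily toward 2
```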
Of course, if \(x_1=0\), the next term is undefined. It happens that if \(x_1=\tfrac23\), then \(x_2=0\), and the sequence terminates; and if \(x_1=\tfrac67\), then \(x_2=\tfrac23\). Backtracking in this way, we find a whole sequence of numbers that eventually lead to \(0\), namely \(y_n=(2^n-2)/(2^n-1)\). Given that an initial term close to \(0\), or close to one of these numbers, eventually gets kicked out to a large negative or positive value, we get the impression that the dynamics are complicated when the initial term is smaller than \(1\).
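These preimages can be checked with exact rational arithmetic; the following Python sketch (using the standard fractions module) confirms the formula for the first few values of \(n\):

```python
from fractions import Fraction

# Backtrack exactly: solving 3 - 2/y = x for y gives y = 2/(3 - x).
# Starting from 0, the preimages should match y_n = (2^n - 2)/(2^n - 1).
y = Fraction(0)
for n in range(1, 6):
    formula = Fraction(2**n - 2, 2**n - 1)
    print(n, y, formula, y == formula)
    y = 2 / (3 - y)   # the value one step earlier in the sequence
```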
I'm not certain, but I believe the only initial term that leads to a limit of \(1\) is \(x_1=1\) itself (so the sequence is constantly \(1\)); note that no other number even maps to \(1\), since \(3-2/y=1\) forces \(y=1\). In other words, apart from that constant sequence, if the sequence converges it converges to \(2\), and this certainly happens whenever the initial term is greater than \(1\).
So what began as a nice little problem in a calculus course gets more interesting if it is modified just a bit. This might be a good topic for a student to explore, although such recursions are well-trodden ground, especially in connection with chaos theory.